
    The Good, the Bad and the Submodular: Fairly Allocating Mixed Manna Under Order-Neutral Submodular Preferences

    We study the problem of fairly allocating indivisible goods (positively valued items) and chores (negatively valued items) among agents with decreasing marginal utilities over items. Our focus is on instances where all the agents have simple preferences; specifically, we assume the marginal value of an item can be −1, 0, or some positive integer c. Under this assumption, we present an efficient algorithm to compute leximin allocations for a broad class of valuation functions we call order-neutral submodular valuations. Order-neutral submodular valuations strictly contain the well-studied class of additive valuations but are a strict subset of the class of submodular valuations. We show that these leximin allocations are Lorenz dominating and approximately proportional. We also show that, under further restriction to additive valuations, these leximin allocations are approximately envy-free and guarantee each agent their maximin share. We complement this algorithmic result with a lower bound showing that the problem of computing leximin allocations is NP-hard when c is a rational number.
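    The leximin objective mentioned above can be made concrete with a small sketch: an allocation is leximin-preferred if, after sorting agents' utilities from worst-off to best-off, its utility vector is lexicographically larger. The comparison rule is standard; the toy utility profiles below are invented for illustration and are not from the paper.

```python
# Minimal sketch of the leximin order on utility profiles.
# The comparison rule is the standard leximin definition; the
# example profiles are illustrative toy numbers.

def leximin_key(utilities):
    """Sort utilities ascending; leximin compares these tuples lexicographically."""
    return tuple(sorted(utilities))

def leximin_better(u, v):
    """True if utility profile u strictly leximin-dominates v."""
    return leximin_key(u) > leximin_key(v)

# Two candidate allocations' utility profiles for three agents:
a = [1, 3, 2]   # worst-off agent receives utility 1
b = [2, 2, 2]   # worst-off agent receives utility 2 -> leximin-preferred
print(leximin_better(b, a))  # True
```

    A leximin allocation is one whose profile is maximal under this order, which is why it simultaneously maximizes the welfare of the worst-off agent, then the second worst-off, and so on.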

    Into the Unknown: Assigning Reviewers to Papers with Uncertain Affinities

    Peer review cannot work unless qualified and interested reviewers are assigned to each paper. Nearly all automated reviewer assignment approaches estimate real-valued affinity scores for each paper-reviewer pair that act as proxies for the predicted quality of a future review; conferences then assign reviewers to maximize the sum of these values. This procedure does not account for noise in affinity score computation -- reviewers can only bid on a small number of papers, and textual similarity models are inherently probabilistic estimators. In this work, we assume paper-reviewer affinity scores are estimated using a probabilistic model. Using these probabilistic estimates, we bound the scores with high probability and maximize the worst-case sum of scores for a reviewer allocation. Although we do not directly recommend any particular method for estimation of probabilistic affinity scores, we demonstrate how to robustly maximize the sum of scores across multiple different models. Our general approach can be used to integrate a large variety of probabilistic paper-reviewer affinity models into reviewer assignment, opening the door to a much more robust peer review process. (14 pages, 0 figures. For associated code and data, see https://github.com/justinpayan/RA)
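    The worst-case objective described above can be sketched on a toy instance: given affinity estimates from several models, choose the assignment that maximizes the minimum total score across models. The brute-force search and the score tables below are illustrative assumptions, not the paper's algorithm or data.

```python
# Hedged sketch of a max-min (robust) reviewer assignment over multiple
# affinity models. Brute force over permutations is only viable for tiny
# instances; it illustrates the objective, not a practical solver.
from itertools import permutations

# scores[model][reviewer][paper]: hypothetical affinity estimates
scores = [
    [[0.9, 0.1], [0.2, 0.8]],   # model A
    [[0.6, 0.4], [0.5, 0.5]],   # model B
]

def worst_case_total(assignment):
    """Min over models of the summed affinity when reviewer r gets paper assignment[r]."""
    return min(sum(m[r][p] for r, p in enumerate(assignment)) for m in scores)

# Pick the one-reviewer-per-paper assignment with the best worst case.
best = max(permutations(range(2)), key=worst_case_total)
print(best, worst_case_total(best))
```

    Real conference instances would replace the enumeration with an integer or linear program, but the objective being maximized is the same worst-case sum.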

    Hard color-singlet exchange in dijet events in proton-proton collisions at √s = 13 TeV

    Events where the two leading jets are separated by a pseudorapidity interval devoid of particle activity, known as jet-gap-jet events, are studied in proton-proton collisions at √s = 13 TeV. The signature is expected from hard color-singlet exchange. Each of the two highest transverse momentum (pT) jets must have pT(jet) > 40 GeV and pseudorapidity 1.4 < |η(jet)| < 4.7. Events with no charged particles of pT > 0.2 GeV in the interval |η| < 1 between the jets are observed in excess of calculations that assume only color exchange. The fraction of events produced via color-singlet exchange, f(CSE), is measured as a function of pT(jet2), the pseudorapidity difference between the two leading jets, and the azimuthal angular separation between the two leading jets. The fraction f(CSE) has values of 0.4-1.0%. The results are compared with previous measurements and with predictions from perturbative quantum chromodynamics. In addition, the first study of jet-gap-jet events detected in association with an intact proton, using a subsample of events with an integrated luminosity of 0.40 pb⁻¹, is presented. The intact protons are detected with the Roman pot detectors of the TOTEM experiment. The f(CSE) in this sample is 2.91 ± 0.70 (stat) +1.08/−1.01 (syst) times larger than that for inclusive dijet production in dijets with similar kinematics.

    General Onganía and the Argentine [Military] Revolution of the Right: Anti-Communism and Morality, 1966-1970

    This project analyzes the relationship between the rhetoric of Argentina's fifth military dictatorship on youth, morality, and communism, and the government's cultural campaigns executed from 1966 to 1973. During "La Revolución Argentina," General Juan Carlos Onganía and his cabinet targeted the "immorality" of the youth because they believed the internal threat of communism had degraded the country's traditional Catholic values. By constructing a moral and spiritual culture through a crusade against immorality, intervention in the national universities, censorship, and anti-communist legislation, conservative officers thought they could shield Argentina's youth from further infiltration of leftist ideologies and preserve the nation's future leaders.

    KNR.pdf

    Preprint of the paper "The k-Nearest Representatives Classifier: A Distance-Based Classifier with Strong Generalization Bounds," to appear in DSAA 2017. Abstract follows: We define the k-Nearest Representatives (k-NR) classifier, a distance-based classifier similar to the k-nearest neighbors classifier, with comparable accuracy in practice and stronger generalization bounds. Uniform convergence is shown through Rademacher complexity, and generalizability is controlled through regularization. Finite-sample risk bounds are also given. Compared to the k-NN, the k-NR requires less memory to store, and classification queries may be made more efficiently. Training is also efficient, being polynomial in all parameters, and is accomplished via a simple empirical risk minimization process.
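    The core idea of classifying by distance to a small set of per-class representatives can be sketched as follows. The paper selects representatives via empirical risk minimization; using each class's mean as its single representative is a simplifying assumption made here purely for illustration, and the data points are invented.

```python
# Hedged sketch: a 1-nearest-representative classifier with one
# representative (the class mean) per class. The real k-NR learns its
# representatives by regularized empirical risk minimization; class
# means are an illustrative stand-in.

def class_means(X, y):
    """One representative per class: the coordinate-wise mean of its points."""
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    return {c: [sum(col) / len(pts) for col in zip(*pts)] for c, pts in groups.items()}

def predict(reps, x):
    """Label of the representative nearest to x in squared Euclidean distance."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(reps, key=lambda c: dist2(reps[c], x))

X = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
y = ["low", "low", "high", "high"]
reps = class_means(X, y)
print(predict(reps, (0.5, 0.5)))   # "low"
print(predict(reps, (5.2, 4.9)))   # "high"
```

    Because only the representatives are stored, memory and query cost scale with the number of representatives rather than the training set size, which mirrors the efficiency claims in the abstract.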

    Towards Interactive Curation & Automatic Tuning of ML Pipelines

    Democratizing Data Science requires a fundamental rethinking of the way data analytics and model discovery are done. Available tools for analyzing massive data sets and curating machine learning models are limited in a number of fundamental ways. First, existing tools require well-trained data scientists to select the appropriate techniques to build models and to evaluate their outcomes. Second, existing tools require heavy data preparation steps and are often too slow to give interactive feedback to domain experts in the model building process, severely limiting the possible interactions. Third, current tools do not provide adequate analysis of statistical risk factors in model development. In this work, we present the first iteration of QuIC-M (pronounced quick-m), an interactive human-in-the-loop data exploration and model building suite. The goal is to enable domain experts to build machine learning pipelines an order of magnitude faster than machine learning experts while achieving model quality comparable to expert solutions.